Automated English essay scoring method based on multi-level semantic features
ZHOU Xianbing, FAN Xiaochao, REN Ge, YANG Yong
Journal of Computer Applications    2021, 41 (8): 2205-2211.   DOI: 10.11772/j.issn.1001-9081.2020101572
Automated Essay Scoring (AES) technology can automatically analyze and score essays, and has become one of the hot research topics in the application of natural language processing to education. To address the problem that current AES methods treat deep and shallow semantic features separately and ignore the impact of multi-level semantic fusion on essay scoring, a neural network model based on Multi-Level Semantic Features (MLSF) was proposed for AES. Firstly, a Convolutional Neural Network (CNN) was used to capture local semantic features and a hybrid neural network was used to capture global semantic features, so that the semantic features of the essay were obtained at a deep level. Secondly, the topic-layer feature was obtained by using a text-level essay topic vector. At the same time, for the grammatical-error and language-richness features that are difficult for deep learning models to mine, a small number of handcrafted features were constructed to obtain the linguistic features of the essay at a shallow level. Finally, the essay was automatically scored through feature fusion. Experimental results show that the proposed model improves performance significantly on all subsets of the public dataset of the Kaggle ASAP (Automated Student Assessment Prize) competition, with an average Quadratic Weighted Kappa (QWK) of 79.17%, validating the effectiveness of the model in AES tasks.
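The evaluation metric reported above, Quadratic Weighted Kappa, can be computed directly from two integer rating vectors. A minimal sketch (not the paper's code; the function name and API are our own):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic Weighted Kappa between two integer rating vectors."""
    n = max_rating - min_rating + 1
    # Observed agreement matrix
    O = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating, b - min_rating] += 1
    # Expected matrix from the marginal histograms
    hist_a = O.sum(axis=1)
    hist_b = O.sum(axis=0)
    E = np.outer(hist_a, hist_b) / len(rater_a)
    # Quadratic disagreement weights
    idx = np.arange(n)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement gives a kappa of 1, and systematic disagreement drives it toward (or below) 0.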
Reference | Related Articles | Metrics
Synthetic aperture radar ship detection method based on self-adaptive and optimal features
HOU Xiaohan, JIN Guodong, TAN Lining, XUE Yuanliang
Journal of Computer Applications    2021, 41 (7): 2150-2155.   DOI: 10.11772/j.issn.1001-9081.2020081187
In order to solve the problem of poor small-target detection performance in Synthetic Aperture Radar (SAR) ship detection, a single-stage ship detection method with self-adaptive anchors was proposed. Firstly, on the basis of the Feature Selective Anchor-Free (FSAF) algorithm, the optimal feature fusion scheme was obtained by Neural Architecture Search (NAS) to make full use of the image feature information. Secondly, a new loss function was proposed to resolve the imbalance between positive and negative samples while enabling the network to regress positions more accurately. Finally, the final detection results were obtained by combining Soft-NMS, a detection-box filtering strategy better suited to ship detection. Several groups of comparison experiments were conducted on a public SAR ship detection dataset. Experimental results show that, compared with the original target detection algorithm, the proposed method significantly reduces missed detections and false positives on small targets, and improves the detection performance for inshore ships to a certain extent.
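Soft-NMS, the box-filtering step mentioned above, keeps overlapping boxes but decays their scores instead of discarding them outright. A hedged sketch of the Gaussian variant (the paper's exact settings are not given; `sigma` and the score threshold here are illustrative):

```python
import numpy as np

def box_iou(a, b):
    # boxes as [x1, y1, x2, y2]
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of boxes overlapping the
    current best box instead of suppressing them to zero."""
    boxes = boxes.astype(float)
    scores = scores.astype(float).copy()
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])  # highest remaining score
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            iou = box_iou(boxes[best], boxes[i])
            scores[i] *= np.exp(-(iou ** 2) / sigma)
        idxs = [i for i in idxs if scores[i] > score_thresh]
    return keep, scores
```

A heavily overlapping box survives with a reduced score, while distant boxes are untouched.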
Instance selection algorithm for big data based on random forest and voting mechanism
ZHOU Xiang, ZHAI Junhai, HUANG Yajie, SHEN Ruicai, HOU Yingzhen
Journal of Computer Applications    2021, 41 (1): 74-80.   DOI: 10.11772/j.issn.1001-9081.2020060982
To deal with the problem of instance selection for big data, an instance selection algorithm based on Random Forest (RF) and a voting mechanism was proposed. Firstly, the big dataset was divided into two subsets: the first subset is large and the second is small or medium-sized. Then, the first, large subset was divided into q smaller subsets, which were deployed to q cloud computing nodes, and the second, small or medium subset was broadcast to all q nodes. Next, the local data subsets at the different nodes were used to train random forests, and the random forests were used to select instances from the second subset. The instances selected at the different nodes were merged to obtain the subset of instances selected in this round. The above process was repeated p times, yielding p subsets of selected instances. Finally, these p subsets were combined by voting to obtain the final set of selected instances. The proposed algorithm was implemented on the two big data platforms Hadoop and Spark, and the implementation mechanisms of the two platforms were compared. In addition, the proposed algorithm was compared with the Condensed Nearest Neighbor (CNN) algorithm and the Reduced Nearest Neighbor (RNN) algorithm on 6 large datasets. Experimental results show that, compared with these two algorithms, the proposed algorithm achieves higher test accuracy and lower time consumption as the dataset grows, which proves that it has good generalization ability and high operational efficiency in big data processing, and can effectively solve the problem of instance selection for big data.
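The final voting step above can be sketched as a simple majority rule over the p per-round selections (the abstract does not spell out the exact voting rule, so majority voting is an assumption here):

```python
from collections import Counter

def majority_vote_selection(rounds_of_selected, p):
    """Merge p per-round instance selections: keep an instance only if
    it was selected in more than half of the p rounds."""
    counts = Counter()
    for selected in rounds_of_selected:
        counts.update(set(selected))        # count each instance once per round
    return {inst for inst, c in counts.items() if c > p / 2}
```

For example, with p = 3 rounds selecting {1,2,3}, {2,3,4} and {2,5}, only instances 2 and 3 reach a majority.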
Virtual-real registration method of natural features based on binary robust invariant scalable keypoints and speeded up robust features
ZHOU Xiang, TANG Liyu, LIN Ding
Journal of Computer Applications    2020, 40 (5): 1403-1408.   DOI: 10.11772/j.issn.1001-9081.2019091621

Concerning the problem that the accuracy and real-time performance of vision-based virtual-real registration in Augmented Reality (AR) are greatly affected by changes of illumination, occlusion and perspective, which easily leads to registration failure, a virtual-real registration method for natural features based on the Binary Robust Invariant Scalable Keypoints-Speeded Up Robust Features (BRISK-SURF) algorithm was proposed. Firstly, the Speeded Up Robust Features (SURF) detector was used to extract feature points. Then, the Binary Robust Invariant Scalable Keypoints (BRISK) descriptor was used to describe the feature points in binary form, and the feature points were matched accurately and efficiently using the Hamming distance. Finally, virtual-real registration was realized according to the homography relationship between images. Experiments were performed on image feature matching and virtual-real registration. Results show that the average precision of the BRISK-SURF algorithm is basically the same as that of SURF and about 25% higher than that of BRISK, and that the average recall of BRISK-SURF is about 10% higher than that of BRISK; the result of the virtual-real registration method based on BRISK-SURF is close to the reference standard data, with high precision and good real-time performance. The experimental results illustrate that the proposed method has high recognition accuracy, registration precision and real-time performance for images with different illuminations, occlusions and perspectives. Besides, an interactive tourist resource presentation and experience system based on AR was realized by using the proposed method.
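Matching binary descriptors with the Hamming distance, as described above, reduces to an XOR plus a popcount. A minimal illustration (descriptor layout and the `max_dist` cutoff are illustrative, not from the paper):

```python
def hamming_distance(d1: bytes, d2: bytes) -> int:
    """Number of differing bits between two binary descriptors."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def match_descriptors(query, train, max_dist=64):
    """Brute-force nearest-neighbour matching under the Hamming distance."""
    matches = []
    for qi, q in enumerate(query):
        best = min(range(len(train)), key=lambda ti: hamming_distance(q, train[ti]))
        if hamming_distance(q, train[best]) <= max_dist:
            matches.append((qi, best))
    return matches
```

This is why binary descriptors such as BRISK match so quickly: the distance is a few machine instructions per byte rather than a floating-point norm.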

Green vehicle routing problem optimization for multi-type vehicles considering traffic congestion areas
ZHAO Zhixue, LI Xiamiao, ZHOU Xiancheng
Journal of Computer Applications    2020, 40 (3): 883-890.   DOI: 10.11772/j.issn.1001-9081.2019071306
In order to reduce the carbon emission of vehicles during logistics distribution, from the perspective of green environmental protection, a Green Vehicle Routing Problem (GVRP) for a multi-type vehicle fleet considering traffic congestion areas was analyzed. Firstly, the effect of multiple vehicle types and different traffic congestion situations on vehicle route planning was investigated. Secondly, a metric function of the carbon emission rate was introduced on the basis of vehicle speed and load. Thirdly, a dual-objective green vehicle routing model was established with minimization of the vehicle management cost as well as the fuel consumption and carbon emission cost as the optimization objectives. Finally, a hybrid differential evolution algorithm combined with the simulated annealing algorithm was designed to solve the problem. Simulation results verify that the model and algorithm can effectively avoid congestion areas. Compared with the simulation results using only 4 t vehicles for distribution, the proposed model reduces the total cost by 1.5% and the fuel consumption and carbon emission cost by 4.3%. Compared with the model whose optimization objective is the shortest driving distance, the proposed model decreases the total distribution cost by 8.1%, demonstrating that it can improve the economic benefits of logistics enterprises and promote energy saving and emission reduction. At the same time, compared with the basic differential evolution algorithm, the hybrid differential evolution algorithm with simulated annealing reduces the total transportation cost by 3% to 6%; compared with the genetic algorithm, the proposed algorithm has a more obvious optimization effect, reducing the total transportation cost by 4% to 11%, which proves the superiority of the algorithm. In summary, the proposed model and algorithm can provide effective advice for the urban distribution routing decisions of logistics enterprises.
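The simulated-annealing component of the hybrid solver hinges on the Metropolis acceptance criterion; a sketch of that rule alone (the cooling schedule and the differential-evolution operators of the paper are omitted):

```python
import math
import random

def sa_accept(current_cost, candidate_cost, temperature, rng=random.random):
    """Metropolis criterion: always accept an improvement; accept a
    worse candidate with probability exp(-delta / T)."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

At high temperature the search wanders freely over routes; as the temperature drops, worsening moves are almost never accepted and the search converges.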
Agricultural greenhouse temperature prediction method based on improved deep belief network
ZHOU Xiangyu, CHENG Yong, WANG Jun
Journal of Computer Applications    2019, 39 (4): 1053-1058.   DOI: 10.11772/j.issn.1001-9081.2018091876
Concerning the low representation ability and long learning time for complex and variable environmental factors in greenhouses, a prediction method based on an improved Deep Belief Network (DBN) combined with Empirical Mode Decomposition (EMD) and Gated Recurrent Unit (GRU) was proposed. Firstly, the temperature environmental factor was decomposed by EMD, and the resulting Intrinsic Mode Functions (IMFs) and residual signal were predicted at different degrees. Secondly, glial chains were introduced to improve the DBN, and the decomposed signals, combined with illumination and carbon dioxide, were used for multi-attribute feature extraction. Finally, the signal components predicted by GRU were summed to obtain the final prediction result. Simulation results show that, compared with the EMD-based DBN (EMD-DBN) and the DBN with glial chains (DBN-g), the prediction error of the proposed method is reduced by 6.25% and 5.36% respectively, verifying its effectiveness and feasibility for prediction in greenhouse time-series environments with strong noise and coupling.
Improved attribute reduction algorithm and its application to prediction of microvascular invasion in hepatocellular carcinoma
TAN Yongqi, FAN Jiancong, REN Yande, ZHOU Xiaoming
Journal of Computer Applications    2019, 39 (11): 3221-3226.   DOI: 10.11772/j.issn.1001-9081.2019051108
Focused on the issue that attribute reduction algorithms based on the neighborhood rough set only consider the influence of a single attribute on the decision attribute and fail to consider the correlation among different attributes, a Neighborhood Rough Set attribute reduction algorithm based on the Chi-square test (ChiS-NRS) was proposed. Firstly, the Chi-square test was used to calculate the correlation, and the influence between correlated attributes was considered when selecting important attributes, which reduced the time complexity and improved the classification accuracy. Then, the improved algorithm was combined with the Gradient Boosting Decision Tree (GBDT) algorithm to establish a classification model, and the model was verified on UCI datasets. Finally, the proposed model was applied to predict the occurrence of microvascular invasion in hepatocellular carcinoma. The experimental results show that, compared with no reduction and the neighborhood rough set reduction algorithm, the proposed algorithm achieves the highest classification accuracy on some UCI datasets. In the prediction of microvascular invasion in hepatocellular carcinoma, compared with the Convolutional Neural Network (CNN), Support Vector Machine (SVM) and Random Forest (RF) prediction models, the proposed model achieves the best results, with a prediction accuracy of 88.13% on the test set and a sensitivity, specificity and Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of 88.89%, 87.5% and 0.90 respectively. Therefore, the proposed prediction model can better predict the occurrence of microvascular invasion in hepatocellular carcinoma and assist doctors in making more accurate diagnoses.
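The Chi-square statistic used above to measure correlation between attributes can be computed from a contingency table of two attributes; a minimal stdlib sketch (the neighborhood-rough-set machinery of the paper is not shown):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic of a 2-D contingency table,
    used here as a proxy for correlation between two attributes."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    chi2 = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            expected = r * c / total          # expected count under independence
            observed = table[i][j]
            chi2 += (observed - expected) ** 2 / expected
    return chi2
```

Two independent attributes give a statistic near 0; strongly associated attributes give a large value, which is what drives the correlated-attribute selection.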
Key frame extraction of motion video based on spatial-temporal feature locally preserving
SHI Nianfeng, HOU Xiaojing, ZHANG Ping
Journal of Computer Applications    2017, 37 (9): 2605-2609.   DOI: 10.11772/j.issn.1001-9081.2017.09.2605
To improve the motion expressiveness and compression rate of motion video key frames, a dynamic key frame extraction technique based on flexible pose estimation and spatial-temporal feature embedding was proposed. Firstly, a Spatial-Temporal feature embedded Flexible Mixture-of-Parts articulated human model (ST-FMP) was designed by preserving the spatial-temporal features of body parts, and the N-best algorithm with spatial-temporal local preservation for uncertain body parts was adopted to estimate the body configuration in a single frame based on ST-FMP. Then, the relative positions and motion directions of the human body were used to describe the characteristics of human motion, and the Laplacian scoring algorithm was used for dimensionality reduction to obtain a discriminative human motion feature vector with local topological structure. Finally, the ISODATA (Iterative Self-Organizing Data Analysis Technique) algorithm was used to determine the key frames dynamically. In key frame extraction experiments on aerobics video, the accuracy on uncertain body parts using ST-FMP was 15 percentage points higher than that using the articulated human model with Flexible Mixture-of-Parts (FMP), reaching 81%, which was also higher than that of Key Frame Extraction based on prior knowledge (KFE) and key frame extraction based on motion blocks. The experimental results show that the proposed approach is sensitive to motion feature selection and human pose configuration, and can be used for sports video annotation.
Improved algorithm of artificial bee colony based on Spark
ZHAI Guangming, LI Guohe, WU Weijiang, HONG Yunfeng, ZHOU Xiaoming, WANG Jing
Journal of Computer Applications    2017, 37 (7): 1906-1910.   DOI: 10.11772/j.issn.1001-9081.2017.07.1906
To combat the low efficiency of the Artificial Bee Colony (ABC) algorithm in solving combinatorial problems, a parallel ABC optimization algorithm based on Spark was presented. Firstly, the bee colony was divided into subgroups, among which broadcast variables were used to transmit data, and the colony was constructed as a resilient distributed dataset. Secondly, a series of transformation operators was used to parallelize the solution search. Finally, a gravitational-mass calculation was used to replace roulette probability selection and reduce the time complexity. Simulation results on the Traveling Salesman Problem (TSP) prove the feasibility of the proposed parallel algorithm. The experimental results show that the proposed algorithm provides a 3.24x speedup over the standard ABC algorithm, and its convergence speed is about 10% faster than that of the unimproved parallel ABC algorithm. It has significant advantages in solving high-dimensional problems.
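For reference, the roulette (fitness-proportional) selection that the proposed algorithm replaces can be sketched as follows (the gravitational-mass replacement is not detailed in the abstract, so only the baseline step is shown):

```python
import random

def roulette_select(fitness, rng=random.random):
    """Fitness-proportional (roulette wheel) selection: the O(n)
    scan that the parallel ABC variant replaces with a mass-based
    calculation."""
    total = sum(fitness)
    r = rng() * total                # spin the wheel
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1          # guard against float round-off
```

An individual with fitness 3 is three times as likely to be picked as one with fitness 1, which is exactly the bias the onlooker-bee phase of ABC relies on.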
Feature detection and description algorithm based on ORB-LATCH
LI Zhuo, LIU Jieyu, LI Hui, ZHOU Xiaogang, LI Weipeng
Journal of Computer Applications    2017, 37 (6): 1759-1762.   DOI: 10.11772/j.issn.1001-9081.2017.06.1759
The binary descriptor based on Learned Arrangements of Three Patch Codes (LATCH) lacks scale invariance, and its rotation invariance depends on the feature detector, so a new feature detection and description algorithm was proposed based on Oriented FAST and Rotated BRIEF (ORB) and LATCH. Firstly, Features from Accelerated Segment Test (FAST) was adopted to detect corner features on the scale space of an image pyramid. Then, the intensity centroid method of ORB was used to obtain orientation compensation. Finally, LATCH was used to describe the features. The experimental results indicate that the proposed algorithm has low computational complexity, high real-time performance, rotation invariance and scale invariance. At the same accuracy, its recall rate is better than that of the ORB and HARRIS-LATCH algorithms, and its inlier matching rate is 4.2 percentage points higher than that of the ORB algorithm. In conclusion, the proposed algorithm narrows the performance gap with histogram-based algorithms such as Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) while maintaining real-time performance, and can process image sequences in real time quickly and accurately.
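ORB's intensity-centroid orientation, used above for orientation compensation, derives an angle from the first-order image moments of a patch around the keypoint. A minimal sketch (patch handling is simplified; real ORB uses a circular patch):

```python
import math

def intensity_centroid_orientation(patch):
    """ORB-style orientation from the image moments m10 and m01 of a
    patch centred on the keypoint: angle = atan2(m01, m10)."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = m01 = 0.0
    for y, row in enumerate(patch):
        for x, v in enumerate(row):
            m10 += (x - cx) * v      # moment along x
            m01 += (y - cy) * v      # moment along y
    return math.atan2(m01, m10)
```

A patch brighter on its right side yields an angle of 0, and one brighter at the bottom yields pi/2, so the descriptor can be rotated into a canonical frame before sampling.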
Learning-based performance monitoring and analysis for Spark in container environments
PI Aidi, YU Jian, ZHOU Xiaobo
Journal of Computer Applications    2017, 37 (12): 3586-3591.   DOI: 10.11772/j.issn.1001-9081.2017.12.3586
The Spark computing framework has been adopted as a framework for big data analysis by an increasing number of enterprises. However, since it is typically deployed in distributed and cloud environments, the complexity of the system is increased, and it is therefore considered difficult to monitor the performance of the Spark framework and to find the jobs that lead to performance degradation. To solve this problem, a real-time monitoring and analysis method for Spark performance in distributed container environments was proposed and implemented. Firstly, the resource consumption information of jobs at runtime was acquired and integrated by instrumenting code in Spark and monitoring the Application Programming Interface (API) files of Docker containers. Then, a Gaussian Mixture Model (GMM) was trained on the job history information of Spark. Finally, the trained model was used to classify the runtime resource consumption information of Spark jobs and to find the jobs that led to performance degradation. The experimental results show that the proposed method can detect 90.2% of abnormal jobs while introducing only 4.7% degradation to the performance of Spark jobs. The proposed method can lighten the burden of error checking and help users find abnormal Spark jobs in a shorter time.
User discovery based on loyalty in social networks
XUE Yun, LI Guohe, WU Weijiang, HONG Yunfeng, ZHOU Xiaoming
Journal of Computer Applications    2017, 37 (11): 3095-3100.   DOI: 10.11772/j.issn.1001-9081.2017.11.3095
Aiming at improving user stickiness in social networks, an algorithm based on user loyalty in social network systems was proposed. In the proposed algorithm, a double Recency-Frequency-Monetary (RFM) model was used to mine users of different loyalty types. Firstly, according to the double RFM model, the users' consumption value and behavior value were calculated dynamically to obtain their loyalty over a certain period. Secondly, typical loyal and disloyal users were identified by using the constructed standard curve and similarity calculation. Lastly, potentially loyal and disloyal users were found by using modularity-based community discovery and the independent cascade propagation model. On several microblog datasets of a social network, a quantitative representation of user loyalty in a Social Network Service (SNS) was confirmed, so that users could be distinguished by their loyalty. The experimental results show that the proposed algorithm can effectively mine users of different loyalty types, and can be applied to personalized recommendation, marketing, etc. in social network systems.
Robot tool calibration method based on camera space point constraint
DU Shanshan, ZHOU Xiang
Journal of Computer Applications    2015, 35 (9): 2678-2681.   DOI: 10.11772/j.issn.1001-9081.2015.09.2678
Tool calibration means calculating the transformation matrix of the tool coordinate system relative to the coordinate system of the robot end. Traditional solutions realize the point constraint by manual teaching. A calibration method based on camera-space positioning was proposed. A camera was used to build the relation between the 3D space of the robot and the 2D space of the camera, so as to achieve the point constraint on the center points of ring marks, which were used as feature points and attached to the end effector. The visual positioning did not need camera calibration or other tedious processes. The Tool Center Point (TCP) was derived from the forward kinematics of the robot and the camera-space point constraint. The calibration error of repeated experiments was less than 0.05 mm, and the absolute positioning error was less than 0.1 mm. The experimental results verify that tool calibration based on camera-space positioning has high repeatability and reliability.
Entity recognition of clothing commodity attributes
ZHOU Xiang, LI Shaobo, YANG Guanci
Journal of Computer Applications    2015, 35 (7): 1945-1949.   DOI: 10.11772/j.issn.1001-9081.2015.07.1945

For the entity recognition of commodity attributes in clothing commodity titles, a hybrid method combining Conditional Random Fields (CRF) with entity boundary detection rules was proposed. Firstly, hidden entity hint character messages were obtained through a statistical method. Secondly, statistical word indicators and their implications were interpreted at character granularity. Thirdly, entity boundary detection rules were proposed based on the entity hint characters and statistical word indicators. Finally, a method for identifying the threshold values in the rules was proposed based on empirical risk minimization. In comparison experiments with character-based CRF models, the overall precision, recall and F1 score were increased by 1.61%, 2.54% and 2.08% respectively, which validates the effectiveness of the entity boundary detection rules. The proposed method can be used in e-commerce Information Retrieval (IR), e-commerce Information Extraction (IE), query intention identification, etc.

Cooperative spectrum sharing method based on spectrum contract
ZHAO Nan, WU Minghu, ZHOU Xianjun, XIONG Wei, ZENG Chunyan
Journal of Computer Applications    2015, 35 (7): 1805-1808.   DOI: 10.11772/j.issn.1001-9081.2015.07.1805

To alleviate the shortage of licensed spectrum resources, a method to design and implement a multi-user Cooperative Spectrum Sharing (CSS) mechanism was proposed based on the characteristics of asymmetric network information and the selfishness of communication users. First, by modeling CSS as a labor market, a modeling method for the multi-user contract-based CSS framework was investigated under the symmetric network information scenario. Then, to avoid the moral hazard problem caused by the hidden actions of Secondary Users (SUs) after contract assignment, a contract-based CSS model was proposed to incentivize the contribution of SUs and ensure spectrum sharing. The experimental results show that, when the direct transmission rate of the Primary User (PU) is less than 0.2 b/s, the network capacity is more than 3 times larger than in the case of non-cooperative spectrum sharing. The proposed multi-user contract-based CSS framework offers new ideas for the efficient sharing and utilization of spectrum resources.

Noise-suppression method for flicker pixels in dynamic outdoor scenes based on ViBe
ZHOU Xiao, ZHAO Feng, ZHU Yanlin
Journal of Computer Applications    2015, 35 (6): 1739-1743.   DOI: 10.11772/j.issn.1001-9081.2015.06.1739

The Visual Background extractor (ViBe) model for moving target detection cannot avoid the interference caused by irregular flicker pixel noise in dynamic outdoor scenes. To solve this issue, a flicker pixel noise-suppression method based on the ViBe algorithm was proposed. In the background model initialization stage, a fixed standard deviation of the background model samples was used as a threshold to limit the range of the samples and obtain suitable background model samples for each pixel. In the foreground detection stage, an adaptive detection threshold was applied to improve the accuracy of the detection results. During the background model update, edge inhibition was applied to background pixels on image edges to prevent erroneous background sample values from being updated into the background model. On this basis, morphological operations were added to fix connected components and obtain more complete foreground images. Finally, the proposed method was compared with the original ViBe algorithm and a ViBe improvement with morphological post-processing on multiple video sequences. The experimental results show that the proposed method can suppress flicker pixel noise effectively and obtain more accurate results.
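The core ViBe decision, which the adaptive detection threshold above refines, counts how many stored background samples lie close to the current pixel value. A simplified grayscale sketch (the radius and match count are the commonly cited ViBe defaults, not values from this paper):

```python
def vibe_classify(pixel, samples, radius=20, min_matches=2):
    """ViBe foreground test for one grayscale pixel: it is background
    if at least `min_matches` of its model samples lie within `radius`
    of the current value."""
    matches = sum(1 for s in samples if abs(pixel - s) < radius)
    return "background" if matches >= min_matches else "foreground"
```

The proposed method effectively makes `radius` adaptive per pixel and constrains how the `samples` list is built and updated.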

Big data benchmarks: state-of-art and trends
ZHOU Xiaoyun, QIN Xiongpai, WANG Qiuyue
Journal of Computer Applications    2015, 35 (4): 1137-1142.   DOI: 10.11772/j.issn.1001-9081.2015.04.1137

A big data benchmark is eagerly needed by customers, industry and academia to evaluate big data systems, improve current techniques and develop new ones. A number of prominent works from the last several years are reviewed, their characteristics introduced and their shortcomings analyzed. On this basis, some suggestions on building a new big data benchmark are provided, including: 1) component benchmarks as well as end-to-end benchmarks should be used in combination, to test different tools inside the system and the system as a whole, with component benchmarks serving as ingredients of the whole big data benchmark suite; 2) workloads should be enriched with complex analytics beyond SQL queries to encompass different application requirements; 3) besides performance metrics (response time and throughput), other metrics should also be considered, including scalability, fault tolerance, energy saving and security.

FP-MFIA: improved algorithm for mining maximum frequent itemsets based on frequent-pattern tree
YANG Pengkun, PENG Hui, ZHOU Xiaofeng, SUN Yuqing
Journal of Computer Applications    2015, 35 (3): 775-778.   DOI: 10.11772/j.issn.1001-9081.2015.03.775

Focusing on the drawback that the Discovering Maximum Frequent Itemsets Algorithm (DMFIA) has to generate many maximum frequent candidate itemsets in each dimension when the dataset has many candidate items and each maximum frequent itemset is not long, an improved algorithm for mining maximum frequent itemsets based on the Frequent-Pattern tree (FP-MFIA) was proposed. According to the header table of the FP-tree, the algorithm uses a bottom-up search to mine maximum frequent itemsets, thus accelerating the counting of candidates. By producing low-dimensional infrequent itemsets from the conditional pattern base of each layer during mining, and by pruning and reducing the dimension of candidate itemsets, the number of candidate itemsets can be largely reduced. At the same time, taking full advantage of the properties of maximum frequent itemsets reduces the search space. According to the comparison of computation time under different supports, the time efficiency of FP-MFIA is at least twice that of DMFIA and BDRFI (an algorithm for mining frequent itemsets based on dimensionality reduction of frequent itemsets), showing that FP-MFIA has a clear advantage when candidate itemsets are high-dimensional.
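Maintaining the set of maximum (maximal) frequent itemsets reduces to a subsumption check: a frequent itemset is discarded if it is contained in an already-found maximal itemset, and it evicts any smaller itemsets it subsumes. A minimal sketch of that bookkeeping (the FP-tree traversal itself is omitted):

```python
def update_maximal(maximal_sets, candidate):
    """Insert a newly found frequent itemset into the collection of
    maximal frequent itemsets, discarding anything it subsumes."""
    c = frozenset(candidate)
    if any(c <= m for m in maximal_sets):
        return maximal_sets                      # already covered, not maximal
    return [m for m in maximal_sets if not m < c] + [c]
```

Pruning candidates against this collection early is one reason algorithms in this family avoid enumerating the full frequent-itemset lattice.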

Human performance model with temporal constraint in human-computer interaction
ZHOU Xiaolei
Journal of Computer Applications    2015, 35 (2): 578-584.   DOI: 10.11772/j.issn.1001-9081.2015.02.0578

Focusing on the issue that prediction models for task accuracy do not capture the speed-accuracy tradeoff in human-computer interaction, a predictive model for accuracy based on temporal constraints was proposed. Through controlled experiments, the method studied the relationship between task accuracy and a specified temporal constraint when users tried to complete a task within a specified amount of time in a computer user interface, which was used to measure human performance in temporally constrained tasks. A series of steering tasks with temporal constraints was designed in the experiment, manipulating the tunnel amplitude, tunnel width and specified movement time. The dependent variable was the task accuracy, quantified as the lateral deviation of the trajectory. Analysis of the experimental data from 30 participants showed that the task accuracy was linearly related to the tunnel width and the steering speed (tunnel amplitude divided by the specified movement time). Finally, a quantitative model for predicting task accuracy in steering tasks with temporal constraints was established based on least-squares regression. The proposed model fits the real dataset well, with a goodness of fit of 0.857.
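The least-squares regression step behind such a model can be illustrated with a plain one-variable fit and its goodness of fit R^2 (a generic sketch; the data and coefficients are not those of the paper):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, returning (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx                     # slope
    b = my - a * mx                   # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

The reported goodness of fit of 0.857 is this R^2 value computed for the paper's two-predictor model.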

Real-time human identification algorithm based on dynamic electrocardiogram signals
LU Yang, BAO Shudi, ZHOU Xiang, CHEN Jinheng
Journal of Computer Applications    2015, 35 (1): 262-264.   DOI: 10.11772/j.issn.1001-9081.2015.01.0262

Electrocardiogram (ECG) signals have attracted widespread interest for potential use in biometrics due to their ease of monitoring and individual uniqueness. To address the accuracy and real-time performance of human identification, a fast and robust ECG-based identification algorithm, particularly suitable for miniaturized embedded platforms, was proposed. Firstly, a dynamic-threshold method was used to extract stable ECG waveforms as template samples and test samples. Then, based on a modified Dynamic Time Warping (DTW) method, the degree of difference between matching samples was calculated to reach a recognition result. Considering that ECG is a time-varying and non-stationary signal, the ECG template database should be updated dynamically to keep the templates consistent with body status and further improve recognition accuracy and robustness. Analysis results on the MIT-BIH Arrhythmia database and our own experimental data show that the proposed algorithm has an accuracy rate of 98.6%. On the other hand, the average running times of the dynamic threshold setting and the optimized DTW algorithm on Android mobile terminals are about 59.5 ms and 26.0 ms respectively, demonstrating significantly improved real-time performance.
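The DTW step computes the cumulative cost of the best alignment between two waveforms; a textbook dynamic-programming sketch (the paper's modification to DTW is not specified in the abstract, so this is the standard form):

```python
def dtw_distance(s, t):
    """Dynamic Time Warping distance between two 1-D sequences."""
    INF = float("inf")
    n, m = len(s), len(t)
    # D[i][j]: minimal cumulative cost aligning s[:i] with t[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because the warping path may stretch or compress time, two heartbeats of slightly different durations can still align with near-zero cost, which is what makes DTW attractive for beat-to-template matching.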

Implementation and performance analysis of Knuth39 parallelization based on many integrated core platform
ZHANG Baodong, ZHOU Jinyu, LIU Xiao, HUA Cheng, ZHOU Xiaohui
Journal of Computer Applications    2015, 35 (1): 58-61.   DOI: 10.11772/j.issn.1001-9081.2015.01.0058

To solve the low running speed problem of the Knuth39 random number generator, a Knuth39 parallelization method based on the Many Integrated Core (MIC) platform was proposed. Firstly, the random number sequence of the Knuth39 generator was divided into subsequences at regular intervals. Then, each thread generated random numbers starting from the beginning of its corresponding subsequence. Finally, the sequences generated by all threads were combined into the final sequence. The experimental results show that the parallelized Knuth39 generator passed 452 tests of TestU01, with results identical to those of the non-parallelized Knuth39 generator. Compared with a single thread on a Central Processing Unit (CPU), the optimal speedup on the MIC platform is 15.69. The proposed method effectively improves the running speed of the Knuth39 generator while preserving the randomness of the generated sequences, making it well suited to high performance computing.
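The sequence-splitting idea can be illustrated with any generator that supports cheap jump-ahead. The sketch below substitutes a simple 64-bit linear congruential generator for Knuth39 (whose lagged-Fibonacci state makes skipping more involved); the constants and thread counts here are assumptions for illustration, and the point demonstrated is that the combined parallel output equals the serial output:

```python
from concurrent.futures import ThreadPoolExecutor

# One-step LCG parameters (Knuth's MMIX constants) and 64-bit modulus.
A, C, M = 6364136223846793005, 1442695040888963407, 2**64

def jump(seed: int, n: int) -> int:
    """Advance the LCG state by n steps in O(log n) (skip-ahead)."""
    a, c = A, C
    while n:
        if n & 1:
            seed = (a * seed + c) % M
        c = ((a + 1) * c) % M      # compose the step with itself:
        a = (a * a) % M            # f^(2k) = f^k o f^k
        n >>= 1
    return seed

def block(seed: int, start: int, count: int) -> list:
    """Generate `count` outputs beginning `start` steps into the stream."""
    s = jump(seed, start)
    out = []
    for _ in range(count):
        s = (A * s + C) % M
        out.append(s)
    return out

seed, total, workers = 12345, 1000, 4
per = total // workers
with ThreadPoolExecutor(workers) as ex:
    parts = ex.map(lambda k: block(seed, k * per, per), range(workers))
parallel = [x for p in parts for x in p]

serial = block(seed, 0, total)
print(parallel == serial)
```

Because each worker jumps directly to the start of its interval, the streams never overlap and their concatenation reproduces the sequential stream exactly, which mirrors the consistency check reported against the non-parallelized generator.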

Bird sounds recognition based on Radon and translation invariant discrete wavelet transform
ZHOU Xiaomin, LI Ying
Journal of Computer Applications    2014, 34 (5): 1391-1396.   DOI: 10.11772/j.issn.1001-9081.2014.05.1391

To improve the accuracy of bird sound recognition in low Signal-to-Noise Ratio (SNR) environments, a new bird sound recognition technique based on the Radon Transform (RT) and the Translation Invariant Discrete Wavelet Transform (TIDWT) of the denoised spectrogram was proposed. First, an improved multi-band spectral subtraction method was presented to reduce background noise. Second, short-time energy was used to detect and remove silent segments of the clean bird sound. Then, the bird sound was converted into a spectrogram, and RT and TIDWT were used to extract features. Finally, classification was performed by a Support Vector Machine (SVM) classifier. The experimental results show that the method achieves good recognition results even when the SNR is below 10 dB.
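The silence-removal step can be sketched with non-overlapping frames; the energy threshold below is an assumed fraction of the peak frame energy, since the abstract does not specify one:

```python
import numpy as np

def remove_silence(signal, frame_len=256, hop=256, ratio=0.1):
    """Drop frames whose short-time energy falls below a fraction of the peak.

    A simplified sketch of energy-based endpoint detection; the exact
    threshold strategy of the paper is not given, so `ratio` is an assumption.
    """
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    energy = np.array([np.sum(f ** 2) for f in frames])
    keep = energy > ratio * energy.max()
    return np.concatenate([f for f, k in zip(frames, keep) if k])

# A loud "chirp" between stretches of near-silence.
rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(2048)
chirp = np.sin(np.linspace(0, 200 * np.pi, 2048))
sound = np.concatenate([quiet, chirp, quiet])
trimmed = remove_silence(sound)
print(len(sound), len(trimmed))
```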

Texture description based on local spectrum energy self-similarity matrix
YANG Hongbo, HOU Xia
Journal of Computer Applications    2014, 34 (3): 790-796.   DOI: 10.11772/j.issn.1001-9081.2014.03.0790

To deal with the texture detection and classification problem, a new texture description method based on the self-similarity matrix of the local spectrum energy of a Gabor filter bank output was presented. Firstly, local frequency-band and orientation information of a texture template was obtained by convolving the template with a polar Log-Gabor filter bank. Then the self-similarities of different local frequency patches were measured and stored in a self-similarity matrix, which was defined as the texture descriptor. Finally, this descriptor was applied to texture detection and classification. Because it reflects the degree of self-similarity across different bands and orientations, the descriptor depends only weakly on the Gabor filter bank parameters. In the tests, the descriptor produced better detection results than the Homogeneous Texture Descriptor (HTD) and other self-similarity descriptors, and the accuracy of multi-texture classification reached 91%. The experimental results demonstrate that the self-similarity matrix of the local power spectrum is an effective texture descriptor, whose detection and classification output can be widely used in subsequent texture analysis tasks such as texture segmentation and recognition.
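The descriptor itself reduces to pairwise similarities between filter-response energy maps. The sketch below uses cosine similarity and random stand-in responses, since the abstract specifies neither the Log-Gabor bank parameters nor the similarity measure:

```python
import numpy as np

def self_similarity_matrix(responses: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between filter-bank energy maps.

    `responses` has shape (n_filters, H, W): one energy map per
    band/orientation channel. The cosine measure is an assumption
    for illustration; the paper builds its matrix from Log-Gabor outputs.
    """
    flat = responses.reshape(responses.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)
    return unit @ unit.T

# Stand-in "filter bank output": four random 32x32 energy maps.
rng = np.random.default_rng(1)
resp = rng.random((4, 32, 32))
S = self_similarity_matrix(resp)
print(S.shape)
```

The resulting matrix is symmetric with a unit diagonal, and its entries describe how alike the channels are rather than what absolute responses the filters produced, which is why the descriptor is less sensitive to the filter-bank parameters.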

Face recognition based on improved isometric feature mapping algorithm
LIU Jiamin, WANG Huiyan, ZHOU Xiaoli, LUO Fulin
Journal of Computer Applications    2013, 33 (01): 76-79.   DOI: 10.3724/SP.J.1087.2013.00076
The Isometric feature mapping (Isomap) algorithm is topologically unstable when the input data are distorted. Therefore, an improved Isomap algorithm was proposed in which Image Euclidean Distance (IMED) is embedded into Isomap. Firstly, images were transformed into the image Euclidean distance space through a linear transformation by introducing metric coefficients and a metric matrix; then the Euclidean distance matrix of the images in the transformed space was calculated to build the neighborhood graph and the geodesic distance matrix; finally, the low-dimensional embedding was constructed by the MultiDimensional Scaling (MDS) algorithm. Experiments with the improved algorithm and a nearest-neighbor classifier were conducted on the ORL and Yale face databases. The results show that the proposed algorithm improves the average recognition rate over Isomap by 5.57% and 3.95% respectively, and is more robust for face recognition under small changes.
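The IMED embedding can be sketched as a one-off linear transform: with the metric matrix G built from a Gaussian of the inter-pixel distances (the constant normalization factor is omitted here), multiplying flattened images by G^(1/2) turns IMED into an ordinary Euclidean distance, after which standard Isomap machinery applies unchanged:

```python
import numpy as np

def imed_transform(h: int, w: int, sigma: float = 1.0) -> np.ndarray:
    """Square root of the IMED metric matrix for h-by-w images.

    G[i, j] = exp(-|P_i - P_j|^2 / (2 sigma^2)) over pixel coordinates
    (normalization constant omitted). G is symmetric PSD, so its square
    root follows from an eigendecomposition.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    sq = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    G = np.exp(-sq / (2 * sigma ** 2))
    vals, vecs = np.linalg.eigh(G)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def imed(img_a: np.ndarray, img_b: np.ndarray, half: np.ndarray) -> float:
    """IMED as plain Euclidean distance in the transformed space."""
    diff = half @ (img_a.ravel() - img_b.ravel())
    return float(np.sqrt(np.sum(diff ** 2)))

half = imed_transform(8, 8)
a = np.zeros((8, 8))
b = np.zeros((8, 8)); b[3, 3] = 1.0
c = np.zeros((8, 8)); c[3, 4] = 1.0
# A one-pixel spatial shift (b vs c) costs less under IMED than adding a
# pixel outright (a vs b), because neighbouring pixels are coupled via G.
print(imed(a, b, half), imed(b, c, half))
```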
Service composition optimization approach based on affection ant colony algorithm
MA Hong-jiang, ZHOU Xiang-bing
Journal of Computer Applications    2012, 32 (12): 3347-3352.   DOI: 10.3724/SP.J.1087.2012.03347
In the service computing mode, affection behaviour was employed to improve the efficiency of service composition. Firstly, an affection space was built to meet behaviour demands, and cognition was defined to reason about affection state changes; during these changes, a mapping between affection and cognition was established, and emotion decay and emotion update were defined to keep affective change stable. Secondly, the affective mechanism was incorporated into the ant colony algorithm to form an affection ant colony algorithm, which was applied to Web Service Modeling Ontology (WSMO) service composition. Finally, a Virtual Travel Agency (VTA) example under WSMO was used to show that the approach is effective and feasible.
Modified self-organizing map network for Euclidean travelling salesman problem
ZHOU Xiao-meng, XU Xiao-ming
Journal of Computer Applications    2012, 32 (07): 1962-1964.   DOI: 10.3724/SP.J.1087.2012.01962
The Self-Organizing Map (SOM) was modified in this paper: the number of neurons did not change with time, and during training the neurons collectively maintained their mean equal to the mean of the data points. After training, every city was associated with the label of a neuron, so one or more cities might share the same neuron. To avoid this, a fractional (real-valued) label index was adopted instead of the integer index, so that different cities receive different indices; the labels then determine the order of the cities in the tour. The algorithm was applied to problems taken from the Traveling Salesman Problem Library (TSPLIB). The experimental results show that the proposed algorithm is feasible and effective.
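An elastic-ring SOM for the Euclidean TSP can be sketched as follows; the neuron count, learning-rate and neighbourhood schedules are assumptions rather than the paper's exact settings, but the final step shows how sorting cities by their nearest-neuron index yields the tour order:

```python
import numpy as np

def som_tsp(cities: np.ndarray, iters: int = 4000, seed: int = 0) -> np.ndarray:
    """Elastic-ring SOM for the Euclidean TSP (generic sketch)."""
    rng = np.random.default_rng(seed)
    n = len(cities) * 4                      # fixed number of ring neurons
    neurons = rng.random((n, 2))
    for t in range(iters):
        city = cities[rng.integers(len(cities))]
        winner = np.argmin(np.sum((neurons - city) ** 2, axis=1))
        radius = max(n / 8 * (1 - t / iters), 1.0)
        lr = 0.8 * (1 - t / iters) + 0.01
        # Circular (ring) distance from the winner defines the neighbourhood.
        d = np.abs(np.arange(n) - winner)
        d = np.minimum(d, n - d)
        h = np.exp(-(d ** 2) / (2 * radius ** 2))
        neurons += lr * h[:, None] * (city - neurons)
    # Each city takes the index of its nearest neuron; sorting cities by
    # that index yields the visiting order along the ring.
    idx = [np.argmin(np.sum((neurons - c) ** 2, axis=1)) for c in cities]
    return np.argsort(idx)

# Cities on a circle: the ring should unfold around them.
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
cities = 0.5 + 0.4 * np.column_stack([np.cos(ang), np.sin(ang)])
tour = som_tsp(cities)
print(tour)
```

With four neurons per city, ties in the nearest-neuron index become rare; the paper's fractional index refines exactly this step so that cities sharing a neuron still receive distinct positions in the tour.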
Switching based fuzzy filter for salt-and-pepper noise
GUO Yuan-hua, HOU Xiao-rong
Journal of Computer Applications    2012, 32 (05): 1293-1295.  
The Adaptive Median (AM) filter preserves detail poorly when the noise ratio is high. On the basis of the Standard Median (SM) filter and the AM filter, this paper introduced a Switching-based Fuzzy (SF) filter. Noise pixels were identified by a max-min operator; then, according to the number of normal pixels in the neighborhood, either the mean method or the T-S fuzzy method was adopted to eliminate the noise pixels. Experimental results show that this filter outperforms the adaptive median filter in detail preservation and image smoothing, keeping a good tradeoff between noise attenuation and detail protection.
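A simplified switching filter is sketched below: the max-min operator flags neighbourhood extremes as noise, and the T-S fuzzy estimator of the paper is replaced by a plain mean of the clean neighbours (with a median fallback) for illustration:

```python
import numpy as np

def switching_filter(img: np.ndarray, win: int = 3) -> np.ndarray:
    """Simplified switching filter for salt-and-pepper noise.

    Pixels equal to the neighbourhood max or min are flagged as noise and
    replaced by the mean of the remaining (clean) neighbours, falling back
    to the plain median when no clean neighbour exists.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win]
            lo, hi = window.min(), window.max()
            if img[y, x] in (lo, hi):            # max-min noise detector
                clean = window[(window > lo) & (window < hi)]
                out[y, x] = clean.mean() if clean.size else np.median(window)
    return out

# Flat grey patch corrupted with a few salt (255) and pepper (0) pixels.
img = np.full((16, 16), 120.0)
img[3, 4] = 255; img[8, 8] = 0; img[12, 2] = 255
restored = switching_filter(img)
print(np.abs(restored - 120).max())
```

Note that on perfectly flat regions every pixel equals both the local max and min, which is the known weakness of the max-min detector; the median fallback keeps such pixels unchanged.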
Strategy of SaaS addressing and interrupt for software generation based on partitioning algorithm
ZHOU Xiang-bing, YANG Xing-jiang, MA Hong-jiang
Journal of Computer Applications    2012, 32 (02): 561-565.   DOI: 10.3724/SP.J.1087.2012.00561
There are problems in recognizing Web service and REST (Representational State Transfer) interfaces during the SaaS software generation process. Therefore, an approach based on a partitioning algorithm was proposed, which uses the partitioning algorithm to divide SaaS functions and defines distinct nodes for different functions; the similarity between nodes was defined to carry out the partition, improving the efficiency of the SaaS functions. Secondly, according to the changing requirements, an addressing and interrupt approach was presented to realize SaaS software generation. Finally, a SaaS online sale application on the Amazon cloud computing platform was analyzed, which shows that the approach is feasible and practical.
Real-time inspection and control system for six DOF platform based on INtime
HUANG Mang-mang, ZHOU Xiao-jun, WEI Yan-ding
Journal of Computer Applications    2011, 31 (10): 2858-2860.   DOI: 10.3724/SP.J.1087.2011.02858
The inspection and control system of a six Degree Of Freedom (DOF) platform must not only meet real-time control requirements but also provide a powerful graphical interface. To overcome the disadvantages of the existing system, this paper designed a real-time inspection and control system that meets both requirements on a single industrial computer based on INtime. In this system, the INtime process operates the data acquisition and control cards directly to obtain real-time performance, while non-real-time tasks are handled by the Windows process. Test results from real runs demonstrate that the system has high real-time performance and that the platform animation in the user interface is rendered fluently, verifying the feasibility and effectiveness of the system.
Novel face recognition method based on KPCA plus KDA
ZHOU Xiao-Yan, ZHENG Wen-ming
Journal of Computer Applications   
Kernel Discriminant Analysis (KDA) and Kernel Principal Component Analysis (KPCA) are the nonlinear extensions of Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) respectively. In this paper, we presented a feature extraction algorithm combining KDA and KPCA to extract reliable and robust features for recognition. Furthermore, a Generalized Nearest Feature Line (GNFL) method was also presented to construct a powerful classifier. The performance of the proposed method was demonstrated on real data.
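A rough analogue of the KPCA-plus-discriminant pipeline can be assembled from scikit-learn parts; note that applying LDA to KPCA features only approximates KDA, a 1-nearest-neighbour classifier stands in for the GNFL classifier, and the dataset and kernel parameters below are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Nonlinear feature extraction (KPCA), then a discriminant projection,
# then a simple nearest-neighbour classifier in place of GNFL.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    KernelPCA(n_components=3, kernel="rbf", gamma=0.1),
    LinearDiscriminantAnalysis(n_components=2),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))
```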